
    Vision-based control of near-obstacle flight

    Lightweight micro unmanned aerial vehicles (micro-UAVs) capable of autonomous flight in natural and urban environments have large potential for civil and commercial applications, including environmental monitoring, forest fire monitoring, homeland security, traffic monitoring, aerial imagery, mapping, and search and rescue. Smaller micro-UAVs capable of flying inside houses or small indoor environments have further applications in surveillance, search and rescue, and entertainment. These applications require the capability to fly close to the ground and among obstacles. Existing UAVs rely on GPS and an AHRS (attitude heading reference system) to control their flight and are unable to detect and avoid obstacles. Active distance sensors such as radars or laser range finders could be used to measure distances to obstacles, but are typically too heavy and power-hungry to be embedded on lightweight systems. In this thesis, we draw inspiration from biology and explore alternative approaches to flight control that allow aircraft to fly near obstacles. We show that optic flow can be used on flying platforms to estimate the proximity of obstacles, and we propose a novel control strategy, called optiPilot, for vision-based near-obstacle flight. Thanks to optiPilot, we demonstrate for the first time autonomous near-obstacle flight of micro-UAVs, both indoors and outdoors, without relying on an AHRS or on external beacons such as GPS. The control strategy requires only a small set of optic flow sensors, two rate gyroscopes and an airspeed sensor, and can run in real time on a tiny embedded microcontroller. Despite its simplicity, optiPilot fully controls the aircraft, including altitude regulation, attitude stabilisation, obstacle avoidance, landing and take-off. This parsimony, inherited from the biology of flying insects, contrasts with the complexity of the systems used so far for flight control, while offering more capabilities.
    The results presented in this thesis contribute to a better understanding of the minimal sensing and control requirements that enable animals and artificial systems to fly, and bring closer to reality the prospect of using lightweight and inexpensive micro-UAVs for civilian purposes.
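    The core estimation step the abstract describes (optic flow plus rate gyroscopes and an airspeed sensor yielding obstacle proximity) can be sketched as follows. This is a minimal illustration, not the thesis implementation: sensor geometry, units and variable names are assumptions.

```python
import math

def translational_flow(measured_flow, gyro_rate):
    # Optic flow measured on a rotating aircraft is the sum of a
    # rotational part (known from the rate gyroscopes) and a
    # translational part; subtracting the gyro-derived rotation
    # "derotates" the flow, leaving only the translational component.
    return measured_flow - gyro_rate

def proximity(trans_flow, airspeed, theta):
    # In translational flight along the main axis, a sensor looking at
    # angle `theta` (rad) off that axis sees flow (v / D) * sin(theta),
    # where D is the distance to the surface; with airspeed v known,
    # the relation is inverted to obtain the proximity 1 / D.
    return trans_flow / (airspeed * math.sin(theta))
```

    For example, flying at 10 m/s with a sensor looking straight down (theta = pi/2) over ground 5 m below, the translational flow is 2 rad/s and the recovered proximity is 0.2 m⁻¹.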

    Vision-based control of near-obstacle flight

    This paper presents a novel control strategy, which we call optiPilot, for autonomous flight in the vicinity of obstacles. Most existing autopilots rely on a complete 6-degree-of-freedom state estimate obtained from a GPS and an Inertial Measurement Unit (IMU) and are unable to detect and avoid obstacles. This is a limitation for missions such as surveillance and environment monitoring that may require near-obstacle flight in urban areas or mountainous environments. OptiPilot instead uses optic flow to estimate the proximity of obstacles and avoid them. Our approach exploits the fact that, for most platforms in translational flight (as opposed to near-hover flight), the translatory motion is essentially aligned with the aircraft's main axis. This property allows us to interpret optic flow measurements directly as proximity indications. Taking inspiration from the neural and behavioural strategies of flying insects, we propose a simple mapping of optic flow measurements into control signals that requires only a lightweight and power-efficient sensor suite and minimal processing power. We first describe results obtained in simulation before presenting the implementation of optiPilot on a real flying platform equipped only with lightweight and inexpensive optical computer mouse sensors, MEMS rate gyroscopes and a pressure-based airspeed sensor. We show that the proposed control strategy not only allows collision-free flight in the vicinity of obstacles, but also stabilises both attitude and altitude over flat terrain. These results shed new light on flight control by suggesting that the complex sensors and processing required for 6-degree-of-freedom state estimation may not be necessary for autonomous flight, and they pave the way toward the integration of autonomy into current and upcoming gram-scale flying platforms.

    Flying over the reality gap: From simulated to real indoor airships

    Because of their ability to naturally float in the air, indoor airships (often called blimps) constitute an appealing platform for research in aerial robotics. However, in long-lasting experiments such as those involving learning or evolutionary techniques, blimps have the disadvantage that they cannot be linked to external power sources and tend to have little mechanical resistance due to their low weight budget. One solution to this problem is to use a realistic flight simulator, which can also significantly reduce experimental duration by running faster than real time. This requires an efficient physics-based dynamic model and a parameter identification procedure, which are complicated to develop and usually rely on costly facilities such as wind tunnels. In this paper, we present a simple and efficient physics-based dynamic model of indoor airships, together with a pragmatic methodology for parameter identification that does not require complex or costly test facilities. Our approach is tested with an existing blimp in a vision-based navigation task. Neural controllers are evolved in simulation to map visual input into motor commands in order to steer the flying robot forward as fast as possible while avoiding collisions. After evolution, the best individuals are successfully transferred to the physical blimp, which experimentally demonstrates the efficiency of the proposed approach.
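    A central ingredient of any such blimp model is that the vehicle accelerates not only its own mass but also the "added mass" of the air it displaces, against aerodynamic drag. The one-dimensional sketch below illustrates this structure with purely illustrative parameter values; the actual model and the identified coefficients are those of the paper, not these.

```python
def step(v, thrust, dt, mass=0.2, added_mass=0.15, drag_coef=0.3):
    # One forward-Euler step of a 1-D blimp translation model:
    # the thrust works against quadratic aerodynamic drag, and the
    # effective inertia is the hull mass plus the added mass of the
    # displaced air.  All values here are illustrative placeholders.
    accel = (thrust - drag_coef * v * abs(v)) / (mass + added_mass)
    return v + accel * dt
```

    With constant thrust the model settles at the terminal speed where thrust balances drag, which is one of the behaviours a parameter identification procedure can match against flight recordings.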

    Ishtar: a flexible and lightweight software for remote data access

    In this paper, we present Ishtar, a lightweight and versatile collection of software for remote data access and monitoring. Such a monitoring architecture is crucial during the development and experimentation of autonomous systems like Micro Air Vehicles. Ishtar comprises a flexible communication layer that allows enumeration, inspection and modification of data in the remote system. The protocol is designed to be robust to the data loss and corruption that typically arise with small autonomous systems, while remaining efficient in its bandwidth use. In addition to the communication layer, Ishtar offers a flexible graphical application for monitoring the remote system, graphing and logging its data, and displaying them in a fully customisable cockpit. Emphasis is put on flexibility, so that Ishtar can be used with arbitrary platforms and experimental paradigms. The software is cross-platform (compatible with Windows, Mac OS and Linux) and cross-architecture (compatible with both microcontroller- and embedded-PC-based remote systems). Finally, Ishtar is open source and can therefore be extended and customised freely by the user community.
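    Robustness to loss and corruption on a telemetry link is typically obtained by framing each message with a synchronisation word, a length field and a checksum, so the receiver can resynchronise after dropped or garbled bytes. The sketch below shows this general pattern only; the sync word, field layout and CRC choice are assumptions, not Ishtar's actual wire format.

```python
import struct
import zlib

SYNC = b"\xAA\x55"  # hypothetical sync word, not Ishtar's framing

def frame(payload: bytes) -> bytes:
    # Wrap a payload as: sync word, little-endian length, payload,
    # CRC32 of the payload.  The CRC lets the receiver reject frames
    # corrupted in transit.
    return (SYNC + struct.pack("<H", len(payload)) + payload
            + struct.pack("<I", zlib.crc32(payload)))

def unframe(buf: bytes):
    # Scan for the sync word, validate the CRC and return
    # (payload, remaining_bytes); (None, remainder) if no complete,
    # valid frame is available yet.
    i = buf.find(SYNC)
    if i < 0 or len(buf) < i + 8:
        return None, buf[max(i, 0):]
    (n,) = struct.unpack_from("<H", buf, i + 2)
    end = i + 4 + n + 4
    if len(buf) < end:
        return None, buf[i:]
    payload = buf[i + 4:i + 4 + n]
    (crc,) = struct.unpack_from("<I", buf, i + 4 + n)
    if crc != zlib.crc32(payload):
        return None, buf[i + 2:]  # corrupted: skip this sync, rescan
    return payload, buf[end:]
```

    A receiver can feed arbitrary byte chunks into `unframe`: leading noise before the sync word is discarded and a corrupted frame is dropped rather than delivered.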

    Autonomous flight at low altitude using light sensors and little computational power

    The ability to fly at low altitude while actively avoiding collisions with the terrain and with objects such as trees and buildings is a great challenge for small unmanned aircraft. This paper builds on a control strategy called optiPilot, whereby a series of optic-flow detectors pointed in divergent viewing directions around the aircraft's main axis are linearly combined into roll and pitch commands using two sets of weights. This control strategy has already proved successful at controlling flight and avoiding collisions in reactive navigation experiments. This paper describes how optiPilot can efficiently steer a flying platform during the critical phases of hand-launched take-off and landing. It then shows how optiPilot can be coupled with a GPS to provide goal-directed, nap-of-the-earth flight control in the presence of obstacles. Two fully autonomous flights of 25 minutes each are described, in which a 400-gram unmanned aircraft flies at approximately 10 m above ground on a circular path including two copses of trees that require efficient collision avoidance actions.
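    The linear combination described above (divergent optic-flow detectors mapped into roll and pitch through two fixed sets of weights) can be sketched as below. The sensor count, the cosine/sine weight patterns and the gains are illustrative assumptions, not the identified optiPilot parameters.

```python
import math

# Hypothetical geometry: N optic-flow sensors spaced around the main
# axis, sensor i at azimuth PHI[i] (0 rad = looking below the aircraft).
N = 8
PHI = [2.0 * math.pi * i / N for i in range(N)]
W_PITCH = [math.cos(p) for p in PHI]  # ventral sensors pull the nose up
W_ROLL = [math.sin(p) for p in PHI]   # lateral sensors command roll

def optipilot_commands(proximities, k_pitch=1.0, k_roll=1.0):
    # Two fixed weight sets linearly map the per-sensor proximity
    # estimates into pitch and roll commands -- no state estimation,
    # just a weighted sum, which fits on a tiny microcontroller.
    pitch = k_pitch * sum(w * p for w, p in zip(W_PITCH, proximities))
    roll = k_roll * sum(w * p for w, p in zip(W_ROLL, proximities))
    return roll, pitch
```

    For instance, when only the downward-looking sensor reports high proximity (flat ground approaching), the weighted sums yield a pure pitch-up command and no roll.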

    optiPilot: control of take-off and landing using optic flow

    Take-off and landing manoeuvres are critical for MAVs because GPS-based autopilots usually do not perceive the distance to the ground or to other potential obstacles. In addition, attitude estimates based on inertial sensors are often perturbed by the strong accelerations occurring during launch. This paper shows how our previously developed control strategy, called optiPilot, can cope with take-off and landing using a small set of inexpensive optic flow sensors.

    Vision-based Navigation from Wheels to Wings

    We describe an incremental approach to the development of autonomous indoor flyers that use only vision to navigate in textured environments. In order to cope with the severe weight and energy constraints of such systems, we use spiking neural controllers that can be implemented in tiny microcontrollers and that map visual information into motor commands. The network morphology is evolved by means of an evolutionary process on the physical robots. This methodology is tested on three robots of increasing complexity, from a wheeled robot to a dirigible to a winged robot. The paper describes the approach, the robots and their degrees of complexity, and summarises the results. In addition, three compatible electronic boards and a choice of vision sensors suitable for these robots are described in more detail. These boards allow a comparative and gradual development of spiking neural controllers for flying robots.
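    The building block of such controllers is the spiking neuron, whose update is cheap enough for a tiny microcontroller. A minimal leaky integrate-and-fire sketch is shown below; the leak, threshold and weights are illustrative, not those evolved on the robots.

```python
def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    # One update of a leaky integrate-and-fire neuron: decay the
    # membrane potential, integrate the weighted binary input spikes,
    # and emit an output spike (with reset to 0) when the potential
    # crosses the threshold.  Only multiply-accumulate and a compare:
    # well suited to a small microcontroller.
    v = leak * v + sum(w * s for w, s in zip(weights, spikes_in))
    if v >= threshold:
        return 0.0, 1
    return v, 0
```

    Driven by a steady input, the neuron integrates over a few steps and then fires, so spike rates on the motor neurons can encode graded motor commands.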

    Optic-flow-based Altitude and Forward Speed Regulation using Stereotypical Lateral Movements

    We propose a novel optic-flow-based flight control strategy, inspired by recent observations and a hypothesis by Baird (unpublished), to regulate forward speed and altitude independently. Unlike previous approaches, where longitudinal ventral optic flow was used to regulate both forward speed and altitude, we suggest using the transversal ventral optic flow generated by a stereotyped lateral oscillation to regulate altitude, while longitudinal ventral optic flow is still used to regulate forward speed. The main advantage of this strategy is that it allows any combination of forward speed and altitude, which is not possible using longitudinal ventral optic flow alone. We show that this control strategy can drive a helicopter- or insect-like simulated agent at any combination of forward speed and altitude. Moreover, thanks to a modulation of the open-loop oscillatory drive of the roll behaviour, this strategy also achieves roll stabilisation.
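    The decoupling rests on a simple geometric fact: over flat ground, the transversal ventral flow has amplitude v_y / h, and since the lateral oscillation is driven open loop its speed amplitude v_y is known, so h follows without knowing the forward speed. A minimal sketch of this reading of the abstract, with assumed names and units:

```python
def estimate_altitude(transversal_flow_amplitude, lateral_speed_amplitude):
    # The lateral oscillation is commanded open loop, so its speed
    # amplitude v_y is known in advance; the transversal ventral optic
    # flow over flat ground oscillates with amplitude v_y / h, hence
    # h = v_y / flow_amplitude, independently of the forward speed.
    return lateral_speed_amplitude / transversal_flow_amplitude

def estimate_forward_speed(longitudinal_flow, altitude):
    # Longitudinal ventral optic flow is v_x / h; with h known from
    # the transversal channel, v_x is recovered on its own.
    return longitudinal_flow * altitude
```

    For example, an agent oscillating laterally at 2 m/s amplitude over ground 10 m below sees a 0.2 rad/s transversal flow amplitude; the longitudinal flow then independently yields the forward speed.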

    Optic flow to control small UAVs

    Autonomous flight in confined or cluttered environments such as houses or urban canyons requires high manoeuvrability, fast mapping from sensors to actuators and very limited overall system weight. Although flying animals cope well with such situations, roboticists still have difficulty reproducing these capabilities. This paper describes how we took inspiration from flying insects to progress toward the goal of developing small UAVs able to fly dynamically in cluttered environments. This endeavour allowed us to demonstrate a 10-gram microflyer capable of fully autonomous operation in an office-sized room using fly-inspired vision, inertial and airspeed sensors. This encouraging result is now being ported to outdoor scenarios such as low-altitude flight in urban or mountainous environments. Importantly, these autonomous capabilities are achieved without the help of GPS or active range finders, which allows the development of very lightweight autopilots.